A generalized convergence theorem for neural networks
Authors
Abstract
Similar resources
A generalized convergence theorem for neural networks
New sampling theorems are developed for isotropic random fields and their associated Fourier coefficient processes. A wavenumber-limited isotropic random field z(·) is considered whose spectral density function is zero outside a disk of radius B centered at the origin of the ...
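As a hedged restatement of the band-limitation condition in this excerpt (the spectral-density symbol S_z and the wavenumber variable ω are illustrative notation, not taken from the paper):

\[
  S_z(\boldsymbol{\omega}) = 0 \quad \text{for } \lVert \boldsymbol{\omega} \rVert > B,
  \qquad
  S_z(\boldsymbol{\omega}) = S_z\bigl(\lVert \boldsymbol{\omega} \rVert\bigr) \ \text{(isotropy)}.
\]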
A computational model and convergence theorem for rumor dissemination in social networks
The spread of rumors, which are unverified statements of uncertain origin, can threaten society, and controlling it is important to the national security councils of countries. If the factors affecting the spread of a rumor (such as agents’ desires, the trust network, etc.) could be identified, they could be used to slow down or stop its spreading. Therefore, a computational m...
Learning in neural networks based on a generalized fluctuation theorem.
Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universa...
GENERALIZED PRINCIPAL IDEAL THEOREM FOR MODULES
The Generalized Principal Ideal Theorem is one of the cornerstones of dimension theory for Noetherian rings. For an R-module M, we identify certain submodules of M that play a role analogous to that of prime ideals in the ring R. Using this definition, we extend the Generalized Principal Ideal Theorem to modules.
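For context, the classical ring-theoretic statement that this excerpt generalizes (Krull's height theorem for Noetherian rings) can be written as follows; the module-theoretic analogue developed in the paper is not reproduced here:

\[
  I = (a_1, \dots, a_n) \subseteq R \ \text{Noetherian}, \quad
  \mathfrak{p} \ \text{a prime minimal over } I
  \ \Longrightarrow\ \operatorname{ht}(\mathfrak{p}) \le n.
\]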
A representer theorem for deep neural networks
We propose to optimize the activation functions of a deep neural network by adding a corresponding functional regularization to the cost function. We justify the use of a second-order total-variation criterion. This allows us to derive a general representer theorem for deep neural networks that makes a direct connection with splines and sparsity. Specifically, we show that the optimal network c...
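A hedged sketch of the kind of regularized objective this excerpt describes (the data pairs (x_m, y_m), loss E, network f, weights θ, tradeoff λ, and per-neuron activations σ_{n,ℓ} are all illustrative notation, not the paper's):

\[
  \min_{\theta,\, \{\sigma_{n,\ell}\}} \
  \sum_{m} E\bigl(y_m, f_{\theta, \sigma}(x_m)\bigr)
  \;+\; \lambda \sum_{n,\ell} \mathrm{TV}^{(2)}(\sigma_{n,\ell}),
  \qquad
  \mathrm{TV}^{(2)}(\sigma) \;=\; \bigl\lVert \mathrm{D}^{2} \sigma \bigr\rVert_{\mathcal{M}},
\]

where D² is the second distributional derivative and ‖·‖_M is the total-variation norm of a measure, matching the second-order total-variation criterion named in the excerpt.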
Journal
Journal title: IEEE Transactions on Information Theory
Year: 1988
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/18.21239